The returns are weakly stationary, with few instances of high correlation. The ACF/PACF suggest candidate orders p = 4 and q = 4, so an ARIMA model is appropriate.
Code
ggAcf(returns_ups^2, na.action = na.pass)
Code
ggPacf(returns_ups^2, na.action = na.pass)
The squared returns show significant correlations at lags p = 1 to 6 and q = 0 to 5. Given this observation, a GARCH model is more appropriate.
The returns are weakly stationary, with few instances of high correlation. The ACF/PACF suggest candidate orders p = 1 or 4 and q = 1, 4, or 5, so an ARIMA model is appropriate.
Code
ggAcf(returns_jbht^2, na.action = na.pass)
Code
ggPacf(returns_jbht^2, na.action = na.pass)
The squared returns show significant correlations at lags p = 1 to 6 and q = 0 to 5. Given this observation, a GARCH model is more appropriate.
p d q AIC BIC AICc
43 3 0 3 -20178.72 -20123.05 -20178.67
Code
temp[which.min(temp$BIC), ]  # lowest BIC: ARIMA(0,1,0)
p d q AIC BIC AICc
2 0 1 0 -20155.12 -20142.74 -20155.11
Code
temp[which.min(temp$AICc), ]
p d q AIC BIC AICc
43 3 0 3 -20178.72 -20123.05 -20178.67
The lowest AIC model is ARIMA(3,0,3).
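The grid-search selection above amounts to taking the row that minimizes each criterion. A minimal sketch of that logic (in Python, with a hypothetical two-row results table built from the values reported above):

```python
# Hypothetical (p, d, q) grid with information criteria, mirroring the
# structure of the `temp` data frame above; only two rows shown.
results = [
    {"p": 0, "d": 1, "q": 0, "AIC": -20155.12, "BIC": -20142.74},
    {"p": 3, "d": 0, "q": 3, "AIC": -20178.72, "BIC": -20123.05},
]

# Pick the row minimizing each criterion, as which.min() does in R.
best_aic = min(results, key=lambda r: r["AIC"])
best_bic = min(results, key=lambda r: r["BIC"])
# AIC prefers the richer ARIMA(3,0,3); BIC's heavier complexity
# penalty prefers the random walk ARIMA(0,1,0).
```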
Code
# model diagnostics
sarima(log.ups, 0, 1, 0)
initial value -4.226615
iter 1 value -4.226615
final value -4.226615
converged
initial value -4.226615
iter 1 value -4.226615
final value -4.226615
converged
<><><><><><><><><><><><><><>
Coefficients:
Estimate SE t.value p.value
constant 4e-04 2e-04 1.5556 0.1199
sigma^2 estimated as 0.0002132107 on 3589 degrees of freedom
AIC = -5.614238 AICc = -5.614238 BIC = -5.610792
Code
sarima(log.ups, 5,0,3)
initial value -0.791711
iter 2 value -4.047115
iter 3 value -4.050940
iter 4 value -4.207225
iter 5 value -4.226742
iter 6 value -4.227080
iter 7 value -4.227405
iter 8 value -4.227452
iter 9 value -4.227478
iter 10 value -4.227520
iter 11 value -4.227703
iter 12 value -4.228076
iter 13 value -4.228890
iter 14 value -4.229145
iter 15 value -4.229363
iter 16 value -4.229561
iter 17 value -4.229591
iter 18 value -4.229612
iter 19 value -4.229615
iter 20 value -4.229620
iter 20 value -4.229620
final value -4.229620
converged
initial value -4.227046
iter 2 value -4.227048
iter 3 value -4.227070
iter 4 value -4.227087
iter 5 value -4.227141
iter 6 value -4.227224
iter 7 value -4.227408
iter 8 value -4.227573
iter 9 value -4.227742
iter 10 value -4.227786
iter 11 value -4.227789
iter 12 value -4.227791
iter 13 value -4.227810
iter 14 value -4.227835
iter 15 value -4.227875
iter 16 value -4.227896
iter 17 value -4.227904
iter 18 value -4.227905
iter 19 value -4.227906
iter 20 value -4.227907
iter 21 value -4.227907
iter 22 value -4.227907
iter 23 value -4.227908
iter 24 value -4.227908
iter 25 value -4.227913
iter 26 value -4.227924
iter 27 value -4.227926
iter 28 value -4.227934
iter 29 value -4.227936
iter 30 value -4.227938
iter 31 value -4.227940
iter 32 value -4.227942
iter 33 value -4.227944
iter 34 value -4.227945
iter 35 value -4.227946
iter 36 value -4.227947
iter 37 value -4.227947
iter 38 value -4.227947
iter 39 value -4.227947
iter 39 value -4.227947
final value -4.227947
converged
<><><><><><><><><><><><><><>
Coefficients:
Estimate SE t.value p.value
ar1 0.6402 0.0990 6.4681 0.0000
ar2 -0.0559 0.1290 -0.4333 0.6648
ar3 -0.3082 0.1230 -2.5050 0.0123
ar4 0.6686 0.1138 5.8733 0.0000
ar5 0.0547 0.0177 3.0918 0.0020
ma1 0.3463 0.0983 3.5224 0.0004
ma2 0.4166 0.0732 5.6908 0.0000
ma3 0.7244 0.1066 6.7984 0.0000
xmean 4.4777 0.5389 8.3090 0.0000
sigma^2 estimated as 0.0002121959 on 3582 degrees of freedom
AIC = -5.612448 AICc = -5.612434 BIC = -5.595221
According to the model diagnostics, ARIMA(0,1,0) is the better model.
Inspecting the standardized-residuals plot of this model shows that substantial volatility remains; further modeling is required to address it.
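The volatility the ARIMA fit leaves behind is exactly what a GARCH model targets: the conditional variance is driven by past squared shocks. A minimal sketch of the GARCH(1,1) variance recursion, sigma2_t = omega + alpha * r2_{t-1} + beta * sigma2_{t-1}, with illustrative (not fitted) parameters:

```python
# GARCH(1,1) conditional variance recursion (illustrative parameters,
# not estimates from any fit in this document).
omega, alpha, beta = 1e-6, 0.05, 0.93  # alpha + beta < 1 for stationarity

def garch11_variance(returns, omega, alpha, beta):
    """Filter a return series into its conditional variance path."""
    # Start at the unconditional variance omega / (1 - alpha - beta).
    sigma2 = [omega / (1 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r**2 + beta * sigma2[-1])
    return sigma2

returns = [0.01, -0.02, 0.015, -0.005]
path = garch11_variance(returns, omega, alpha, beta)
# Each variance responds to the previous squared return, which is how
# the model reproduces volatility clustering.
```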
p d q AIC BIC AICc
41 3 0 2 -19336.13 -19286.65 -19336.09
Code
temp[which.min(temp$BIC), ]  # lowest BIC: ARIMA(0,1,0)
p d q AIC BIC AICc
2 0 1 0 -19310.12 -19297.75 -19310.12
Code
temp[which.min(temp$AICc), ]
p d q AIC BIC AICc
41 3 0 2 -19336.13 -19286.65 -19336.09
The lowest AIC model is ARIMA(3,0,2).
Code
# model diagnostics
sarima(log.jbht, 0, 1, 0)
initial value -4.108928
iter 1 value -4.108928
final value -4.108928
converged
initial value -4.108928
iter 1 value -4.108928
final value -4.108928
converged
<><><><><><><><><><><><><><>
Coefficients:
Estimate SE t.value p.value
constant 5e-04 3e-04 1.9063 0.0567
sigma^2 estimated as 0.0002697929 on 3589 degrees of freedom
AIC = -5.378865 AICc = -5.378864 BIC = -5.375418
Code
sarima(log.jbht, 3,0,2)
initial value -0.618808
iter 2 value -0.677764
iter 3 value -0.828776
iter 4 value -1.767697
iter 5 value -2.620695
iter 6 value -3.714623
iter 7 value -3.793820
iter 8 value -4.033062
iter 9 value -4.079833
iter 10 value -4.097624
iter 11 value -4.102843
iter 12 value -4.109163
iter 13 value -4.109191
iter 14 value -4.109193
iter 15 value -4.109193
iter 16 value -4.109198
iter 17 value -4.109208
iter 18 value -4.109249
iter 19 value -4.109996
iter 20 value -4.110005
iter 21 value -4.110104
iter 22 value -4.110270
iter 23 value -4.110285
iter 24 value -4.110287
iter 25 value -4.110302
iter 26 value -4.110306
iter 27 value -4.110308
iter 28 value -4.110308
iter 29 value -4.110312
iter 30 value -4.110319
iter 31 value -4.110341
iter 32 value -4.110408
iter 33 value -4.110411
iter 34 value -4.110454
iter 35 value -4.110516
iter 36 value -4.110527
iter 37 value -4.110530
iter 38 value -4.110552
iter 39 value -4.110563
iter 40 value -4.110570
iter 41 value -4.110573
iter 42 value -4.110578
iter 43 value -4.110590
iter 44 value -4.110623
iter 45 value -4.110710
iter 46 value -4.110716
iter 47 value -4.110741
iter 48 value -4.110822
iter 49 value -4.110859
iter 50 value -4.110860
iter 51 value -4.110967
iter 52 value -4.110994
iter 53 value -4.111022
iter 54 value -4.111114
iter 55 value -4.111228
iter 56 value -4.111458
iter 57 value -4.111633
iter 58 value -4.111683
iter 59 value -4.111684
iter 60 value -4.111685
iter 61 value -4.111690
iter 62 value -4.111706
iter 63 value -4.111716
iter 64 value -4.111720
iter 65 value -4.111720
iter 65 value -4.111720
iter 65 value -4.111720
final value -4.111720
converged
initial value -4.110351
iter 2 value -4.110351
iter 3 value -4.110357
iter 4 value -4.110450
iter 5 value -4.110492
iter 6 value -4.110498
iter 7 value -4.110500
iter 8 value -4.110514
iter 9 value -4.110535
iter 10 value -4.110566
iter 11 value -4.110585
iter 12 value -4.110595
iter 13 value -4.110596
iter 14 value -4.110597
iter 14 value -4.110597
final value -4.110597
converged
<><><><><><><><><><><><><><>
Coefficients:
Estimate SE t.value p.value
ar1 -0.6804 0.0548 -12.4157 0
ar2 0.8165 0.0432 18.9006 0
ar3 0.8628 0.0764 11.2902 0
ma1 1.6492 0.0609 27.0799 0
ma2 0.8232 0.0851 9.6779 0
xmean 4.4852 0.5295 8.4703 0
sigma^2 estimated as 0.0002683415 on 3585 degrees of freedom
AIC = -5.379419 AICc = -5.379412 BIC = -5.36736
According to the model diagnostics, ARIMA(3,0,2) is the better model.
Inspecting the standardized-residuals plot of this model shows that substantial volatility remains; further modeling is required to address it.
Series: log.ups
ARIMA(0,1,0)
sigma^2 = 0.0002134: log likelihood = 10078.34
AIC=-20154.68 AICc=-20154.68 BIC=-20148.5
Training set error measures:
ME RMSE MAE MPE MAPE
Training set 0.0003808867 0.01460477 0.009700401 0.008393482 0.2164048
MASE ACF1
Training set 0.9998257 -0.01479948
Code
res.ups <- fit.ups$res
ggtsdisplay(res.ups^2)
The squared residuals show significant correlations at lags p = 1 to 6 and q = 0 to 6. Given this observation, a GARCH model is more appropriate.
Code
model <- list()
## set counter
cc <- 1
for (p in 1:6) {
  for (q in 1:6) {
    model[[cc]] <- garch(res.ups, order = c(q, p), trace = F)
    cc <- cc + 1
  }
}
## get AIC values for model evaluation
GARCH_AIC <- sapply(model, AIC)
## model with lowest AIC is the best
which(GARCH_AIC == min(GARCH_AIC))
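The nested loop above flattens the 6×6 (p, q) grid into a single counter cc, with q advancing fastest and the order passed as c(q, p). Recovering the winning order from the index returned by which(GARCH_AIC == min(GARCH_AIC)) can be sketched as follows (a Python sketch, assuming the same loop ordering as the R code):

```python
# Rebuild the counter-to-order mapping used by the nested R loop:
# cc advances with q fastest, p slowest, and order = c(q, p).
grid = [(q, p) for p in range(1, 7) for q in range(1, 7)]

def order_for_index(cc):
    """Map the 1-based counter back to the garch(order = c(q, p)) pair."""
    return grid[cc - 1]

# e.g. index 4 maps to order c(4, 1), which is consistent with the
# garchFit(~ garch(1, 4)) fit used for res.ups later in this document.
```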
Series: log.jbht
ARIMA(3,0,2) with non-zero mean
Coefficients:
ar1 ar2 ar3 ma1 ma2 mean
-0.6804 0.8165 0.8628 1.6492 0.8232 4.4852
s.e. 0.0548 0.0432 0.0764 0.0609 0.0851 0.5295
sigma^2 = 0.0002688: log likelihood = 9665.75
AIC=-19317.49 AICc=-19317.46 BIC=-19274.19
Training set error measures:
ME RMSE MAE MPE MAPE MASE
Training set 0.0005153677 0.01638113 0.01181423 0.01093111 0.2688153 0.9998603
ACF1
Training set -0.004793327
Code
res.jbht <- fit.jbht$res
ggtsdisplay(res.jbht^2)
The squared residuals show significant correlations at lags p = 1 to 6 and q = 0 to 6. Given this observation, a GARCH model is more appropriate.
Code
model <- list()
## set counter
cc <- 1
for (p in 1:6) {
  for (q in 1:6) {
    model[[cc]] <- garch(res.jbht, order = c(q, p), trace = F)
    cc <- cc + 1
  }
}
## get AIC values for model evaluation
GARCH_AIC <- sapply(model, AIC)
## model with lowest AIC is the best
which(GARCH_AIC == min(GARCH_AIC))
Series: log.ups
ARIMA(0,1,0)
sigma^2 = 0.0002134: log likelihood = 10078.34
AIC=-20154.68 AICc=-20154.68 BIC=-20148.5
Training set error measures:
ME RMSE MAE MPE MAPE
Training set 0.0003808867 0.01460477 0.009700401 0.008393482 0.2164048
MASE ACF1
Training set 0.9998257 -0.01479948
Code
summary(garchFit(~ garch(1, 4), data = res.ups, trace = F))
Title:
GARCH Modelling
Call:
garchFit(formula = ~garch(1, 4), data = res.ups, trace = F)
Mean and Variance Equation:
data ~ garch(1, 4)
<environment: 0x154ee1ad8>
[data = res.ups]
Conditional Distribution:
norm
Coefficient(s):
mu omega alpha1 beta1 beta2 beta3
3.3748e-04 1.9458e-06 6.1328e-02 1.6404e-01 1.0000e-08 2.1648e-01
beta4
5.5097e-01
Std. Errors:
based on Hessian
Error Analysis:
Estimate Std. Error t value Pr(>|t|)
mu 3.375e-04 2.048e-04 1.648 0.099332 .
omega 1.946e-06 5.721e-07 3.401 0.000671 ***
alpha1 6.133e-02 9.333e-03 6.571 5.00e-11 ***
beta1 1.640e-01 1.243e-01 1.319 0.187093
beta2 1.000e-08 1.024e-01 0.000 1.000000
beta3 2.165e-01 1.736e-01 1.247 0.212518
beta4 5.510e-01 1.161e-01 4.744 2.09e-06 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Log Likelihood:
10379.95 normalized: 2.890546
Description:
Thu Apr 11 16:09:27 2024 by user:
Standardised Residuals Tests:
Statistic p-Value
Jarque-Bera Test R Chi^2 1.741128e+04 0.0000000
Shapiro-Wilk Test R W 9.082065e-01 0.0000000
Ljung-Box Test R Q(10) 1.289984e+01 0.2293259
Ljung-Box Test R Q(15) 2.222985e+01 0.1019172
Ljung-Box Test R Q(20) 2.558735e+01 0.1798770
Ljung-Box Test R^2 Q(10) 5.098596e+00 0.8844951
Ljung-Box Test R^2 Q(15) 6.156418e+00 0.9769988
Ljung-Box Test R^2 Q(20) 7.572598e+00 0.9943390
LM Arch Test R TR^2 5.461723e+00 0.9407557
Information Criterion Statistics:
AIC BIC SIC HQIC
-5.777193 -5.765134 -5.777200 -5.772894
Code
checkresiduals(garch(res.ups, order = c(4, 1), trace = F))
Ljung-Box test
data: Residuals
Q* = 6.6616, df = 10, p-value = 0.757
Model df: 0. Total lags used: 10
The model’s residual plots generally look satisfactory, with only a few notable lags in the ACF plot. The AIC values are relatively low, indicating a good fit. However, not all of the GARCH coefficients are statistically significant, suggesting the model may not capture every correlation. The Ljung-Box test p-values are all above 0.05, indicating no significant autocorrelation remains in the residuals.
Best model: ARIMA(3,0,2) + GARCH(1,1)
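This combination splits the series into an ARIMA mean equation and a GARCH variance equation. A schematic one-step-ahead forecast under that split, using the fitted coefficients reported in this document for illustration (the helper functions and the lag values fed to them are hypothetical):

```python
# Fitted ARIMA(3,0,2) mean-equation coefficients (from the summary output)
# and GARCH(1,1) variance-equation coefficients (from the garchFit output).
ar = [-0.6804, 0.8165, 0.8628]
ma = [1.6492, 0.8232]
mean = 4.4852
omega, alpha1, beta1 = 5.6905e-6, 4.4789e-2, 9.3347e-1

def forecast_mean(x_last3, eps_last2):
    """ARIMA(3,0,2) one-step mean: intercept + AR terms + MA terms."""
    dev = [x - mean for x in x_last3]  # deviations from the fitted mean
    return (mean
            + sum(a * d for a, d in zip(ar, dev))
            + sum(m * e for m, e in zip(ma, eps_last2)))

def forecast_var(eps_prev, sigma2_prev):
    """GARCH(1,1) one-step variance: omega + alpha1*eps^2 + beta1*sigma^2."""
    return omega + alpha1 * eps_prev**2 + beta1 * sigma2_prev
```

The mean forecast sets the level; the variance forecast sets the width of the prediction interval around it.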
Code
# returns_jbht
summary(fit.jbht)
Series: log.jbht
ARIMA(3,0,2) with non-zero mean
Coefficients:
ar1 ar2 ar3 ma1 ma2 mean
-0.6804 0.8165 0.8628 1.6492 0.8232 4.4852
s.e. 0.0548 0.0432 0.0764 0.0609 0.0851 0.5295
sigma^2 = 0.0002688: log likelihood = 9665.75
AIC=-19317.49 AICc=-19317.46 BIC=-19274.19
Training set error measures:
ME RMSE MAE MPE MAPE MASE
Training set 0.0005153677 0.01638113 0.01181423 0.01093111 0.2688153 0.9998603
ACF1
Training set -0.004793327
Title:
GARCH Modelling
Call:
garchFit(formula = ~garch(1, 1), data = res.jbht, trace = F)
Mean and Variance Equation:
data ~ garch(1, 1)
<environment: 0x1563aa920>
[data = res.jbht]
Conditional Distribution:
norm
Coefficient(s):
mu omega alpha1 beta1
4.9052e-04 5.6905e-06 4.4789e-02 9.3347e-01
Std. Errors:
based on Hessian
Error Analysis:
Estimate Std. Error t value Pr(>|t|)
mu 4.905e-04 2.467e-04 1.988 0.0468 *
omega 5.690e-06 1.401e-06 4.062 4.87e-05 ***
alpha1 4.479e-02 6.588e-03 6.798 1.06e-11 ***
beta1 9.335e-01 1.036e-02 90.147 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Log Likelihood:
9864.037 normalized: 2.746878
Description:
Thu Apr 11 16:09:27 2024 by user:
Standardised Residuals Tests:
Statistic p-Value
Jarque-Bera Test R Chi^2 1597.3918812 0.0000000
Shapiro-Wilk Test R W 0.9726371 0.0000000
Ljung-Box Test R Q(10) 6.5663567 0.7656506
Ljung-Box Test R Q(15) 10.6645352 0.7759922
Ljung-Box Test R Q(20) 11.8036396 0.9226687
Ljung-Box Test R^2 Q(10) 4.6228010 0.9149109
Ljung-Box Test R^2 Q(15) 11.1049966 0.7451164
Ljung-Box Test R^2 Q(20) 17.4146797 0.6259058
LM Arch Test R TR^2 10.7046610 0.5543844
Information Criterion Statistics:
AIC BIC SIC HQIC
-5.491527 -5.484636 -5.491530 -5.489071
Code
checkresiduals(garch(res.jbht, order = c(1, 1), trace = F))
Ljung-Box test
data: Residuals
Q* = 6.453, df = 10, p-value = 0.7759
Model df: 0. Total lags used: 10
The model’s residual plots generally look satisfactory, with only a few notable lags in the ACF plot. The AIC values are relatively low, indicating a good fit, and all of the ARIMA and GARCH coefficients are statistically significant at the 5% level. The Ljung-Box test p-values are all above 0.05, indicating no significant autocorrelation remains in the residuals.
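For reference, the Ljung-Box statistic reported by checkresiduals() is Q = n(n+2) Σₖ ρ̂ₖ²/(n−k), compared against a chi-squared distribution with h degrees of freedom. A from-scratch sketch (plain Python; the toy series is illustrative):

```python
def acf(x, k):
    """Sample autocorrelation of x at lag k."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    ck = sum((x[i] - m) * (x[i + k] - m) for i in range(n - k))
    return ck / c0

def ljung_box_q(x, h):
    """Q = n(n+2) * sum_{k=1}^{h} rho_k^2 / (n - k)."""
    n = len(x)
    return n * (n + 2) * sum(acf(x, k) ** 2 / (n - k) for k in range(1, h + 1))

# A white-noise-like toy series should give a small Q relative to chi^2_h,
# hence a large p-value, as in the tests above.
x = [0.3, -0.1, 0.2, -0.4, 0.1, 0.0, -0.2, 0.3, -0.1, 0.2, -0.3, 0.1]
q = ljung_box_q(x, 3)
```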